Cubic regularization of Newton’s method for convex problems with constraints
Abstract
In this paper we derive efficiency estimates of the regularized Newton's method as applied to constrained convex minimization problems and to variational inequalities. We study a one-step Newton's method and its multistep accelerated version, which converges on smooth convex problems as O(1/k³), where k is the iteration counter. We also derive an efficiency estimate for a second-order scheme applied to smooth variational inequalities; its global rate of convergence is established at the level O(1/k).
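As a concrete illustration (not the authors' constrained scheme), the one-step method minimizes, at each iterate, the cubic model ⟨g, h⟩ + ½⟨Hh, h⟩ + (M/6)‖h‖³, where M bounds the Lipschitz constant of the Hessian. A minimal numpy sketch for the unconstrained smooth convex case, solving the cubic subproblem by bisection on the step norm, could look like this:

```python
import numpy as np

def cubic_newton_step(grad, hess, M):
    """One cubic-regularized Newton step (sketch, assumes hess is PSD):
    minimize <g, h> + 0.5 h^T H h + (M/6) ||h||^3 over h."""
    n = len(grad)
    # The minimizer satisfies h = -(H + (M r / 2) I)^{-1} g with r = ||h||,
    # so we find r >= 0 solving ||(H + (M r / 2) I)^{-1} g|| = r.
    def step_for(r):
        return -np.linalg.solve(hess + 0.5 * M * r * np.eye(n), grad)
    lo, hi = 0.0, 1.0
    while np.linalg.norm(step_for(hi)) > hi:  # bracket the root
        hi *= 2.0
    for _ in range(60):  # bisection on the scalar equation
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(step_for(mid)) > mid:
            lo = mid
        else:
            hi = mid
    return step_for(hi)

def cubic_newton(f_grad, f_hess, x0, M, iters=50):
    """Iterate the cubic-regularized Newton step from x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x + cubic_newton_step(f_grad(x), f_hess(x), M)
    return x
```

For H ⪰ 0 the map r ↦ ‖(H + (M r/2) I)⁻¹ g‖ is decreasing, so the bisection converges to the unique root, which characterizes the global minimizer of the cubic model.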
Similar Papers
Optimal Newton-type methods for nonconvex smooth optimization problems
We consider a general class of second-order iterations for unconstrained optimization that includes regularization and trust-region variants of Newton's method. For each method in this class, we exhibit a smooth, bounded-below objective function, whose gradient is globally Lipschitz continuous within an open convex set containing any iterates encountered and whose Hessian is α-Hölder continuous...
Convex Surface Visualization Using Rational Bi-cubic Function
The rational cubic function with three parameters has been extended to a rational bi-cubic function to visualize the shape of regular convex surface data. The rational bi-cubic function involves six parameters in each rectangular patch. Data-dependent constraints are derived on four of these parameters to visualize the shape of convex surface data, while the other two are free to refine the shape of s...
Stochastic Variance-Reduced Cubic Regularized Newton Method
We propose a stochastic variance-reduced cubic regularized Newton method for non-convex optimization. At the core of our algorithm is a novel semi-stochastic gradient along with a semi-stochastic Hessian, which are specifically designed for the cubic regularization method. We show that our algorithm is guaranteed to converge to an (ε, √ε)-approximate local minimum within Õ(n/ε^{3/2}) second-order oracl...
Erratum to: A regularized Newton method without line search for unconstrained optimization
For unconstrained optimization, Newton-type methods have good convergence properties and are used in practice. Newton's method combined with a trust-region method (the TR-Newton method), the cubic regularization of Newton's method, and the regularized Newton method with line search are such Newton-type methods. The TR-Newton method and the cubic regularization of N...
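The regularized Newton method mentioned above damps the Hessian by a regularization term, which can make a line search unnecessary even when the Hessian is singular. A minimal sketch, assuming a Levenberg-style rule where the damping parameter is taken proportional to the gradient norm (a hypothetical simplification of the paper's actual rule), could be:

```python
import numpy as np

def regularized_newton(f_grad, f_hess, x0, c=1.0, iters=100, tol=1e-12):
    """Regularized Newton iteration (sketch): damp the Hessian by
    mu = c * ||g||, so H + mu*I stays invertible and no line search
    is performed near a solution."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = f_grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = f_hess(x)
        mu = c * np.linalg.norm(g)  # damping vanishes as g -> 0,
        # recovering the pure Newton step near the minimizer
        x = x - np.linalg.solve(H + mu * np.eye(len(x)), g)
    return x
```

Because the damping μ = c‖g‖ shrinks as the iterates approach a stationary point, the method behaves like pure Newton locally while remaining well defined globally for a positive semidefinite Hessian.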
LANCS Workshop on Modelling and Solving Complex Optimisation Problems
Towards optimal Newton-type methods for nonconvex smooth optimization. Coralia Cartis, Coralia.Cartis (at) ed.ac.uk, School of Mathematics, Edinburgh University. We show that the steepest-descent and Newton methods for unconstrained non-convex optimization, under standard assumptions, may both require a number of iterations and function evaluations arbitrarily close to the steepest-descent's global...